Bose-Einstein Statistics

${\bf \large \cal Hovhannes \ Grigoryan}\\ {\rm \small New \ York, \ NY}$

Abstract

We study the features of Bose-Einstein statistics that lead to condensation, even in the absence of interactions between bosons. We start by finding all possible single-particle states of a 3D harmonic trap below an energy cutoff, and discuss various ways to count the degeneracy of each energy level. We also discuss the difference between the partition functions of classical and quantum particles. We study the example of five bosons bound in a harmonic trap, and find the dependence of the condensate fraction on temperature by summing directly over all states. We then extend this to an arbitrary number of trapped bosons, using the canonical and grand-canonical formalisms [1].


Contents

  • Non-interacting ideal bosons
  • Five bosons bound in a 3D harmonic trap
  • Analytic approach: bosons in canonical ensemble
  • Trapped bosons in grand canonical ensemble
  • Large-N limit in grand canonical ensemble
  • Appendix: Oscillators and Strings

References

[1] We closely follow the outline of the lectures by W. Krauth, "Statistical Mechanics: Algorithms and Computations", Oxford University Press (2006)

[2] M. A. M. Versteegh and D. Dieks, "The Gibbs Paradox and the Distinguishability of Identical Particles", Am. J. Phys. 79 (2011), https://arxiv.org/abs/1012.4111v2

[3] R. Kubo, "Statistical Mechanics: An Advanced Course with Problems and Solutions", North-Holland Pub. Co. (1971)

In [1]:
%%html

<style>
body, p, div.rendered_html { 
    color: black;
    font-family: "Times New Roman", serif;
    font-size: 12pt;
}
</style>
In [2]:
import sys, os
import warnings

sys.path.append(os.path.abspath(os.path.join('..')))
warnings.filterwarnings('ignore')

import random
import numpy as np
import pandas as pd
import cmath
from itertools import combinations, product
from collections import OrderedDict
from numba import njit, jit
from sympy import series, symbols, Function, Sum, Product, Poly

import matplotlib.pyplot as plt
from matplotlib import animation, rc, cm
from mpl_toolkits.mplot3d import axes3d, Axes3D

from utils.make_media import *
from utils.html_converter import html_converter

from IPython.display import HTML, Image, clear_output

%precision 20
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

plt.rcParams['figure.figsize'] = (14,8) 
plt.style.use('fivethirtyeight')
rc('animation', html='html5')

Non-interacting ideal bosons

The ideal Bose gas is the only system in physics that can undergo a phase transition without any mutual interaction between its components. Let us enumerate the energy eigenstates of a single boson in a 3D harmonic trap with the following program. For simplicity, set $\hbar = 1$, assume all spring constants are one ($\omega_{x,y,z}=1$), and set the energy of the ground state equal to zero (that is, measure energy relative to the zero-point energy, $\hbar \omega/2=1/2$ per direction). In this case: $E_{n_x, n_y, n_z} = n_x + n_y + n_z$.

In [3]:
# plain Python: numba's nopython mode does not support lists of nested tuples
def energy_states(Emax):
    """Return the Emax lowest-lying single-particle states of the trap,
    sorted by energy (here Emax counts states, not an energy cutoff)."""
    States = []
    for E_x in range(Emax):
        for E_y in range(Emax):
            for E_z in range(Emax):
                States.append(((E_x + E_y + E_z), (E_x, E_y, E_z)))
    States.sort()
    states = [States[k] for k in range(Emax)]
    energies = [States[k][0] for k in range(Emax)]
    distinct_energies = sorted(set(energies))
    return states, energies, distinct_energies
In [4]:
states, energies, distinct_energies = energy_states(Emax=35)
print("k, E, (Ex, Ey, Ez)")
for k, s in enumerate(states):
    print(f"{k}, {s[0]}, {s[1]}")
k, E, (Ex, Ey, Ez)
0, 0, (0, 0, 0)
1, 1, (0, 0, 1)
2, 1, (0, 1, 0)
3, 1, (1, 0, 0)
4, 2, (0, 0, 2)
5, 2, (0, 1, 1)
6, 2, (0, 2, 0)
7, 2, (1, 0, 1)
8, 2, (1, 1, 0)
9, 2, (2, 0, 0)
10, 3, (0, 0, 3)
11, 3, (0, 1, 2)
12, 3, (0, 2, 1)
13, 3, (0, 3, 0)
14, 3, (1, 0, 2)
15, 3, (1, 1, 1)
16, 3, (1, 2, 0)
17, 3, (2, 0, 1)
18, 3, (2, 1, 0)
19, 3, (3, 0, 0)
20, 4, (0, 0, 4)
21, 4, (0, 1, 3)
22, 4, (0, 2, 2)
23, 4, (0, 3, 1)
24, 4, (0, 4, 0)
25, 4, (1, 0, 3)
26, 4, (1, 1, 2)
27, 4, (1, 2, 1)
28, 4, (1, 3, 0)
29, 4, (2, 0, 2)
30, 4, (2, 1, 1)
31, 4, (2, 2, 0)
32, 4, (3, 0, 1)
33, 4, (3, 1, 0)
34, 4, (4, 0, 0)

These 35 states are all the single-particle states with energy at most $4$; we label them $\sigma_0, \dots, \sigma_{34}$, enumerated as above. For example, $\sigma_{21}$ denotes a particle with quantum numbers $(0, 1, 3)$.

Naive way to find the degeneracy

In [5]:
@njit
def naive_degeneracy(Emax):
    degen = np.zeros(Emax+1)
    for E_x in range(Emax+1):
        for E_y in range(Emax+1):
            for E_z in range(Emax+1):
                E = (E_x + E_y + E_z)
                if E <= Emax:
                    degen[E] += 1
    return degen
In [6]:
naive_degeneracy(Emax=20).astype(int)
Out[6]:
array([  1,   3,   6,  10,  15,  21,  28,  36,  45,  55,  66,  78,  91,
       105, 120, 136, 153, 171, 190, 210, 231])

Single particle degeneracy (3D harmonic trap)

How many states have energy $E$? The degeneracy of the energy level $E = n$ is

$$\mathcal{N}(E) = \frac{(E+1)(E+2)}{2}.$$

This is the number of ways to place $E$ indistinguishable balls (energy quanta) into $d=3$ distinct bins, $\binom{E+d-1}{d-1} = \binom{E+2}{2}$. The bins are distinct in the following sense: for $E=2$, $d=2$, the following $3$ partitions are counted as different:

$$(0,2), (1,1), (2,0)$$
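As a quick sanity check (our addition), the stars-and-bars count can be brute-forced by enumerating all ordered $d$-tuples that sum to $E$:

```python
from itertools import product
from math import comb

def count_compositions(E, d):
    """Count ordered d-tuples of non-negative integers summing to E."""
    return sum(1 for t in product(range(E + 1), repeat=d) if sum(t) == E)

# stars and bars: C(E + d - 1, d - 1)
for E, d in [(2, 2), (4, 3), (6, 3)]:
    assert count_compositions(E, d) == comb(E + d - 1, d - 1)

print(count_compositions(4, 3))  # 15 = (4 + 1)(4 + 2)/2
```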

In a more formal approach, the number of states of energy $E$ can be determined by first observing that:

$$\mathcal{N}(E)=\sum_{E_x=0}^{E}\sum_{E_y=0}^{E}\sum_{E_z=0}^{E}\delta_{(E_x+E_y+E_z),E},$$

where $\delta_{j,k}$ is the Kronecker delta, and then inserting its integral representation (exact for integers $j,k$):

$$\delta_{j,k}=\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{i(j-k)\lambda} \implies$$

$$\mathcal{N}(E)=\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{-i E\lambda}\left(\sum_{E_x=0}^{E}e^{i E_x\lambda}\right)^3 = \int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{-i E\lambda}\left[\frac{1-e^{i\lambda (E+1)}}{1-e^{i\lambda}}\right]^3 \\= \frac{1}{2\pi i}\oint_{\mathcal{C}}\frac{\text{d}z}{z^{E+1}}\left[\frac{1-z^{E+1}}{1-z}\right]^3 = \frac{1}{2\pi i}\oint_{\mathcal{C}}\frac{\text{d}z}{z^{E+1}}\left[\frac{1}{2}\sum_{i=0}^{\infty}(i+1)(i+2)z^i \right]\left(1-z^{E+1}\right)^3 = \frac{(E+1)(E+2)}{2},$$

where $z=e^{i\lambda}$. The integration range corresponds to a circular contour $\mathcal{C}$ of radius 1 centered at the origin of the complex plane. By the residue theorem, the integral equals the coefficient of the $z^{-1}$ term in the Laurent series of $\frac{1}{z^{E+1}}\left[\frac{1-z^{E+1}}{1-z}\right]^3$.
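As a numerical sanity check (our addition, not part of the derivation), the $\lambda$-integral can be evaluated on a uniform grid; summing the finite geometric series term by term avoids the removable singularity at $\lambda=0$, and a uniform Riemann sum is exact for trigonometric polynomials:

```python
import numpy as np

def N_numeric(E, n_grid=4096):
    lam = 2 * np.pi * np.arange(n_grid) / n_grid - np.pi
    # S(lam) = sum_{k=0}^{E} e^{i k lam}, summed directly
    S = sum(np.exp(1j * k * lam) for k in range(E + 1))
    # integral d(lam)/(2 pi) = average over the uniform grid
    return np.mean(np.exp(-1j * E * lam) * S**3).real

for E in range(8):
    assert round(N_numeric(E)) == (E + 1) * (E + 2) // 2
print([round(N_numeric(E)) for E in range(6)])  # [1, 3, 6, 10, 15, 21]
```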

Below is a way to check that: $1/(1-z)^3 = \left(\sum_{i=0}^{\infty}z^i\right)^3 = \frac{1}{2}\sum_{i=0}^{\infty}(i+1)(i+2)z^i$

In [7]:
x, E, i, k = symbols("x, E, i, k", positive=True, real=True)

def f(x, E):
    a = Sum(x ** i, (i, 0, E)).doit()
    b = a ** 3
    return b.doit()

E = 15
series(f(x, E), x, 0, E)
Out[7]:
$\displaystyle 1 + 3 x + 6 x^{2} + 10 x^{3} + 15 x^{4} + 21 x^{5} + 28 x^{6} + 36 x^{7} + 45 x^{8} + 55 x^{9} + 66 x^{10} + 78 x^{11} + 91 x^{12} + 105 x^{13} + 120 x^{14} + O\left(x^{15}\right)$

Another look at single boson degeneracy: generating functions

The generating function that produces the above single-particle degeneracy can be generalized to an arbitrary number of dimensions. For a $d$-dimensional harmonic oscillator, the degeneracy follows from the generating function:

$$f(x) = (1+x+x^2+x^3+\dots +x^{E})^d = \left[\frac{1 - x^{E+1}}{1 - x}\right]^d $$

Again, the ordering of the terms in the sum $E = E_1 + \dots + E_d$ is relevant. The degeneracy at energy $E$ is the coefficient of $x^E$, extracted via the derivative (equivalent to the contour integration above, by Cauchy's integral formula):

$$\frac{1}{E!}\left(\frac{d^E f(x)}{d x^E}\right)_{x=0}$$
In [8]:
def f(x, d, E):
    a = Sum(x ** i, (i, 0, E)).doit()
    b = a ** d
    return b.doit()

E = 15
d = 3
series(f(x, d, E), x, 0, E)
Out[8]:
$\displaystyle 1 + 3 x + 6 x^{2} + 10 x^{3} + 15 x^{4} + 21 x^{5} + 28 x^{6} + 36 x^{7} + 45 x^{8} + 55 x^{9} + 66 x^{10} + 78 x^{11} + 91 x^{12} + 105 x^{13} + 120 x^{14} + O\left(x^{15}\right)$

Combinatoric expression for degeneracy in d-dimensions

For an arbitrary number $d$ of independent oscillators (spatial dimensions), the number of ways to distribute $E$ quanta among the $d$ directions is $\binom{E+d-1}{d-1}$, or:

$$\mathcal{N}(E) = \frac{(E+d-1)!}{E! (d-1)!}.$$

In [9]:
import operator as op
from functools import reduce

def n_choose_k(n, k):
    k = min(k, n - k)
    numer = reduce(op.mul, range(n, n - k, -1), 1)
    denom = reduce(op.mul, range(1, k + 1), 1)
    return numer // denom   # integer division avoids float rounding for large n
In [10]:
E = 4
d = 9

n_choose_k(n=E+d-1, k=d-1)
Out[10]:
495

Compare this with the code in the previous subsection

In [11]:
series(f(x, d, E), x, 0, E+1)
Out[11]:
$\displaystyle 1 + 9 x + 45 x^{2} + 165 x^{3} + 495 x^{4} + O\left(x^{5}\right)$

Number of distinct ways of representing $n$ as a sum of positive integers.

The order of the terms in the sum is irrelevant. In this case, the generating function is:

$$f(x) = \prod_{k=1}^{\infty}\left[\frac{1}{1 - x^k}\right] = \sum_{n=0}^{\infty}p(n) x^n $$

Here $p(n)$ is the number of distinct ways (order is irrelevant) to represent positive integer $n$ in terms of the sum of arbitrary number of positive integers. For example, if $n=4$, we have $p(4)=5$, and the five partitions are:

$$\{1+1+1+1\}, \ \ \{1+1+2\}, \ \ \{1+3\}, \ \ \{2+2\}, \ \ \{4\}$$

The asymptotic formula, obtained by G. H. Hardy and S. Ramanujan in 1918, is:

$$ p(n) \sim \frac{1}{4 n\sqrt{3}}\exp\left(\pi \sqrt{\frac{2 n}{3}} \right) \ , \ \ \ n \to \infty $$
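To see how accurate the asymptotics already is at moderate $n$, we can compare it against exact values of $p(n)$ computed by the standard dynamic program (this check is our addition):

```python
import math

def partition_numbers(n_max):
    """p(0..n_max): for each part size k, add the ways to use parts of size k."""
    p = [1] + [0] * n_max
    for k in range(1, n_max + 1):
        for m in range(k, n_max + 1):
            p[m] += p[m - k]
    return p

def hardy_ramanujan(n):
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

p = partition_numbers(100)
print(p[:8])                           # [1, 1, 2, 3, 5, 7, 11, 15]
print(p[100])                          # 190569292
print(hardy_ramanujan(100) / p[100])   # ~1.05: within 5% already at n = 100
```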
In [12]:
def f(x, Max):
    return Product(1/(1 - x ** i), (i, 1, Max)).doit()

n = 15
fx = series(f(x, n), x, 0, n+1).removeO()  # parts larger than n cannot contribute
pn = Poly(fx, x).all_coeffs()[::-1]        # ascending coefficients, zeros included
np.array(pn)
Out[12]:
array([1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176],
      dtype=object)

Restricted partitions

The number $p_{\leq k}(n)$ of partitions of $n$ into parts of size at most $k$ is generated by:

$$f(x) = \prod_{j=1}^{k}\left[\frac{1}{1 - x^j}\right] = \sum_{n=0}^{\infty}p_{\leq k}(n) x^n $$

By conjugation (transposing the Young diagram), this also equals the number of partitions of $n$ into at most $k$ parts.

In [13]:
def f(x, k):
    return Product(1/(1 - x ** i), (i, 1, k)).doit()

def p(n, k, Max=100):
    fx = series(f(x, k), x, 0, Max + 1).removeO()
    pnk = Poly(fx, x).all_coeffs()[::-1]   # ascending coefficients, zeros included
    return pnk[n]

n = 9

print(f"p(n, k) = number of partitions of n into at most k parts")
for k in range(1, n+1):
    print(f"p({n}, {k}) =", p(n, k))
p(n, k) = number of partitions of n into at most k parts
p(9, 1) = 1
p(9, 2) = 5
p(9, 3) = 12
p(9, 4) = 18
p(9, 5) = 23
p(9, 6) = 26
p(9, 7) = 28
p(9, 8) = 29
p(9, 9) = 30

States and their partial and full degeneracies

In $d$ dimensions the degeneracy of the energy level $E$ is $\binom{E+d-1}{d-1}$, while the number of distinct (unordered) states is given by the restricted partition count $p(E, d)$ (partitions of $E$ into at most $d$ parts).

The code below does the following:

  • For all energies $\epsilon$ up to $E$, it enumerates the quantum numbers of the $d$-dimensional oscillator whose sum is $\epsilon$.
  • For each $\epsilon$ it finds the set of states $\{\epsilon_i, i = 1, \dots, d \,|\, \sum_{i=1}^{d} \epsilon_i = \epsilon\}$ with their partial degeneracies $m$, the number of distinct states at that energy, $p(\epsilon, d)$, and the total degeneracy ${\mathcal N}(\epsilon)$:
$$[\{(\epsilon_1, \epsilon_2, \dots, \epsilon_d, m)\}, p(\epsilon, d), {\mathcal N}(\epsilon)]$$

In [14]:
# plain Python: numba cannot compile dict- and pandas-based code, so @jit is omitted
def state_info(E, n=3):
    """ Finds states and their partial and full degeneracies. 
        params:
            E: (integer) max energy n oscillators can have: E_1 + E_2 + ... + E_n = e for e = 0, ..., E
            n: (integer) number of independent oscillators
        returns: pandas dataframe of energy_dict s.t.
            energy_dict[e] -> [ OrderedDict[(distinct state, state degeneracy), ... , (distinct state, state degeneracy)], 
                                total number of distinct states, 
                                total degeneracy ] 
    """
    energy_dict = OrderedDict()
    # cartesian product over the quantum numbers of the n oscillators
    for s in product(range(E + 1), repeat=n):
        e = sum(s)                  # total energy of this state
        state = tuple(sorted(s))    # canonical (unordered) representative
        if e not in energy_dict:
            states = OrderedDict({state: 1})
            energy_dict[e] = [states, 1, 1]
        else:
            if state not in energy_dict[e][0]:
                energy_dict[e][0][state] = 1
                energy_dict[e][1] += 1
            else:
                energy_dict[e][0][state] += 1
            energy_dict[e][2] += 1
    for k, v in energy_dict.items():
        v[0] = sorted(v[0].items())
    
    pd.set_option('display.max_colwidth', 0)
    energy_df = pd.DataFrame(energy_dict).T
    energy_df.columns = ["states", "distinct_states", "degeneracy"]
    energy_df.index.name = "E"
    return energy_df
In [15]:
%%time 

energy_df = state_info(E=9, n=3)
Wall time: 455 ms
In [16]:
energy_df.head(11)
Out[16]:
states distinct_states degeneracy
E
0 [((0, 0, 0), 1)] 1 1
1 [((0, 0, 1), 3)] 1 3
2 [((0, 0, 2), 3), ((0, 1, 1), 3)] 2 6
3 [((0, 0, 3), 3), ((0, 1, 2), 6), ((1, 1, 1), 1)] 3 10
4 [((0, 0, 4), 3), ((0, 1, 3), 6), ((0, 2, 2), 3), ((1, 1, 2), 3)] 4 15
5 [((0, 0, 5), 3), ((0, 1, 4), 6), ((0, 2, 3), 6), ((1, 1, 3), 3), ((1, 2, 2), 3)] 5 21
6 [((0, 0, 6), 3), ((0, 1, 5), 6), ((0, 2, 4), 6), ((0, 3, 3), 3), ((1, 1, 4), 3), ((1, 2, 3), 6), ((2, 2, 2), 1)] 7 28
7 [((0, 0, 7), 3), ((0, 1, 6), 6), ((0, 2, 5), 6), ((0, 3, 4), 6), ((1, 1, 5), 3), ((1, 2, 4), 6), ((1, 3, 3), 3), ((2, 2, 3), 3)] 8 36
8 [((0, 0, 8), 3), ((0, 1, 7), 6), ((0, 2, 6), 6), ((0, 3, 5), 6), ((0, 4, 4), 3), ((1, 1, 6), 3), ((1, 2, 5), 6), ((1, 3, 4), 6), ((2, 2, 4), 3), ((2, 3, 3), 3)] 10 45
9 [((0, 0, 9), 3), ((0, 1, 8), 6), ((0, 2, 7), 6), ((0, 3, 6), 6), ((0, 4, 5), 6), ((1, 1, 7), 3), ((1, 2, 6), 6), ((1, 3, 5), 6), ((1, 4, 4), 3), ((2, 2, 5), 3), ((2, 3, 4), 6), ((3, 3, 3), 1)] 12 55
10 [((0, 1, 9), 6), ((0, 2, 8), 6), ((0, 3, 7), 6), ((0, 4, 6), 6), ((0, 5, 5), 3), ((1, 1, 8), 3), ((1, 2, 7), 6), ((1, 3, 6), 6), ((1, 4, 5), 6), ((2, 2, 6), 3), ((2, 3, 5), 6), ((2, 4, 4), 3), ((3, 3, 4), 3)] 13 63

Get occupational numbers from the states

Given a state of energy $E$, we want to get an occupation number representation of the same state:

$$(\epsilon_1, \epsilon_2, \dots, \epsilon_n) \to (n_0, n_1, n_2, \dots, n_E)$$

where $n_k$ is the number of particles with energy $k$. For instance, for $E=9$ and $n=3$

$$ (0, 2, 7) \to (1, 0, 1, 0, 0, 0, 0, 1, 0, 0) $$
In [17]:
# plain Python: numba cannot compile these tuple/pandas operations, so @jit is omitted
def convert(state, Es):
    ns = tuple()
    for i in range(Es + 1):
        ns += (state.count(i),)   # number of oscillators sitting at level i
    return ns

def occup_repr(E, n):
    energy_df = state_info(E, n)
    ns = []
    ms = []
    for s in energy_df["states"][E]:
        occ = convert(s[0], E)    # occupation-number representation of the state
        m = s[1]                  # its degeneracy
        ns.append(occ)
        ms.append(m)
    pd.set_option('display.max_rows', 1000)
    pd.set_option('display.max_columns', 1000)
    df = pd.DataFrame(ns).T.iloc[::-1,:]
    df.index.name = "Levels" 
    df.columns = ms
    return df.replace(0, "-")
In [18]:
E=9
n=3

orep = occup_repr(E, n)
orep
Out[18]:
3 6 6 6 6 3 6 6 3 3 6 1
Levels
9 1 - - - - - - - - - - -
8 - 1 - - - - - - - - - -
7 - - 1 - - 1 - - - - - -
6 - - - 1 - - 1 - - - - -
5 - - - - 1 - - 1 - 1 - -
4 - - - - 1 - - - 2 - 1 -
3 - - - 1 - - - 1 - - 1 3
2 - - 1 - - - 1 - - 2 1 -
1 - 1 - - - 2 1 1 1 - - -
0 2 1 1 1 1 - - - - - - -
In [19]:
data = orep.replace("-", np.nan).values[:-1,:].astype(float).ravel()
data = data[~np.isnan(data)]   # drop empty levels before histogramming
plt.hist(data, bins=n)
plt.show()

Distribution of energies (empirical observation)

In [20]:
n = 6
E = 12
energy_df = state_info(E, n)
In [21]:
list_all_states = []
list_all_distinct_states = []

for s, m in energy_df["states"][E]:
    ls = list(s * m)
    ls2 = list(s)
    list_all_states += ls
    list_all_distinct_states += ls2
In [22]:
ns = np.arange(E)
c = 1.0/np.sum(np.exp(-n*ns/E))

plt.plot(ns, c * np.exp(-n*ns/E), label=r"$\exp(-n\epsilon_n/E)$")
plt.hist(list_all_states, bins=E, density=True, color='r', alpha=0.3, label="all states")
plt.hist(list_all_distinct_states, bins=E, density=True, color='g', alpha=0.3, label="distinct states")
plt.legend(loc="upper right")
plt.xlabel(r"$\epsilon_n$ (energy of a single oscillator)")
plt.ylabel("density of state distribution")
plt.title(f"n={n} oscillators with energy E={E}")
#plt.ylim(0,0.1)
plt.show()
In [23]:
plt.plot(ns, c * np.exp(-n*ns/E), label=r"$\exp(-n\epsilon_n/E)$")
plt.hist(list_all_states, bins=E, density=True, color='r', alpha=0.3, label="all states")
plt.hist(list_all_distinct_states, bins=E, density=True, color='g', alpha=0.3, label="distinct states")
plt.legend(loc="upper right")
plt.xlabel(r"$\epsilon_n$ (energy of a single oscillator)")
plt.ylabel("density of state distribution")
plt.title(f"n={n} oscillators with energy E={E}")
plt.xlim(6, 12)
plt.ylim(0, 0.01)
plt.show()

We observe that, compared to the density over all states, the density over distinct states (the relevant one for bosons) gives relatively more weight to configurations in which bosons occupy higher energy levels; correspondingly, more levels are left empty.

Difference between bosons and classical particles

Consider single particle states (in a 3D harmonic trap) with the following set of possible quantum numbers:

  • $\sigma, \ E, \ (E_x, E_y, E_z)$
  • 0, 0, (0, 0, 0)
  • 1, 1, (0, 0, 1)
  • 2, 1, (0, 1, 0)
  • 3, 1, (1, 0, 0)
  • 4, 2, (0, 0, 2)
  • 5, 2, (0, 1, 1)
  • 6, 2, (0, 2, 0)
  • 7, 2, (1, 0, 1)
  • 8, 2, (1, 1, 0)
  • 9, 2, (2, 0, 0)

We have 10 different single particle states labeled by $\sigma$, and 3 possible energies. The single particle partition function is:

$$Z_1 = \sum_{\sigma \in \{\sigma_0, \dots, \sigma_9\}}e^{-\beta E(\sigma)} = 1 + 3 e^{-\beta} + 6 e^{-2\beta},$$

Two particles can be in any one of the following $100$ states (the map $E_1 E_2 \to$ number of states):

$$E_1 E_2 = \{00: 1, \ \ 01: 3, \ \ 10: 3, \ \ 11: 9, \ \ 20: 6, \ \ 02: 6, \ \ 21: 18, \ \ 12: 18, \ \ 22: 36\}$$

Distinguishable Particles

The partition function in case of distinguishable particles is:

$$Z^d_2 = \sum_{\sigma, \sigma' \in \{\sigma_0, \dots, \sigma_9\}}e^{-\beta E(\sigma, \sigma')} = 1 + 6 e^{-\beta} + 21 e^{-2\beta} + 36 e^{-3\beta} + 36 e^{-4\beta} = Z^2_1$$
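The identity $Z^d_2 = Z_1^2$ can be verified symbolically by summing over all $10 \times 10$ ordered pairs (a small check of our own, with the symbol $b$ standing for $e^{-\beta}$):

```python
import sympy as sp

b = sp.symbols('b')                     # b stands for e^{-beta}
energies = [0] * 1 + [1] * 3 + [2] * 6  # the ten states sigma_0 ... sigma_9

Z1 = sum(b**E for E in energies)
Z2_dist = sp.expand(sum(b**(E1 + E2) for E1 in energies for E2 in energies))

assert Z2_dist == sp.expand(Z1**2)
print(Z2_dist)  # 36*b**4 + 36*b**3 + 21*b**2 + 6*b + 1
```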

Gibbs paradox (distinguishability of identical particles): If we use the same logic to derive the partition function of an ideal gas, we run into a contradiction: the entropy is not extensive, and the 2nd law of thermodynamics can apparently be violated. This is known as the Gibbs paradox. Indeed, place a divider in the middle of a box filled with an ideal gas. When the divider is removed, the thermodynamic entropy remains the same, because the macroscopic properties of the gas do not change, while the calculated statistical entropy increases: if the initial multiplicity is $X^{2 N}$ and the final multiplicity is $(2 X)^{2 N}$, then $\Delta S = 2 k_B N \log 2$. Similarly, the entropy can be decreased by the same amount by putting the divider back. Since this can be done without any other change in the environment, it would violate the 2nd law of thermodynamics. To "fix" this problem, the multiplicity (or partition function) is divided by $N!$, which removes the over-counting of states related by particle interchange. Now the initial multiplicity is $X^{2 N}/(N!)^2$ and the final multiplicity is $(2 X)^{2 N}/(2 N)!$, so $\Delta S \to 0$ in the $N \to \infty$ limit. As a result, the "fixed" partition function for an $N$-particle state is $Z^d_N = Z^N_1/N!$. The $1/N!$ recovers correct classical thermodynamics; it does not arise from considerations of real distinguishability.
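The cancellation can be checked numerically. The sketch below (our addition) evaluates $\Delta S/k_B = 2N\ln 2 - \ln(2N)! + 2\ln N!$ exactly via `math.lgamma`; the arbitrary per-particle multiplicity $X$ drops out of the difference:

```python
from math import lgamma, log

def delta_S(N):
    """Mixing entropy change (in units of k_B) with the 1/N! counting included."""
    return 2 * N * log(2) - lgamma(2 * N + 1) + 2 * lgamma(N + 1)

for N in [10, 10**3, 10**6]:
    print(N, delta_S(N) / (2 * N))  # per-particle entropy change -> 0
```

By Stirling's formula the residual is $\Delta S \approx \tfrac{1}{2}\ln(\pi N)$, which grows only logarithmically, so the entropy change per particle vanishes as $N \to \infty$.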

There are other views on the subject. As discussed in Ref. [2], classical particles are always distinguishable by their positions and trajectories: they can be identified over time by their different histories, so permutation of two identical classical particles produces a different microstate, and division by $N!$ is not necessary. Identical quantum particles, on the other hand, are indistinguishable from the start, and again no ad hoc factor of $N!$ is required.

The $1/N!$ emerges naturally if we treat classical particles as independent and indistinguishable within the grand-canonical formalism shown below. However, we should keep in mind that the grand-canonical partition function is evaluated at a saddle point of the canonical partition function (in the large-$N$ limit). If we instead treat classical particles as distinguishable, but count microstates in which particles occupy the same state only once, the approximation becomes coarser: it is valid only when the particle number is smaller than the number of energy states, and when temperature and density are such that states with every level singly occupied vastly outnumber states with two or more particles in the same level. In particular, for an ideal gas at high temperature and low density it is unlikely to have more than one particle in the same state. Moreover, in the classical limit the energy spectrum becomes continuous, and for a continuous spectrum the probability that two particles are in exactly the same state is zero, so the approximation is exact.

Indistinguishable Particles

In quantum statistics there is no over-counting. If, for example, the system contains 2 oscillators with energy cutoff 1, the only 3 possible many-body states are:

  • $\vert \psi_{E=0} \rangle = \vert 0 \rangle \otimes \vert 0 \rangle$
  • $\vert \psi_{E=\epsilon} \rangle = \frac{1}{\sqrt{2}}\left(\vert 0 \rangle \otimes \vert 1 \rangle + \vert 1 \rangle \otimes \vert 0 \rangle\right)$
  • $\vert \psi_{E=2\epsilon} \rangle = \vert 1 \rangle \otimes \vert 1 \rangle$

For quantum particles, indistinguishability applies to the quantum states, not to the particles: the concept of an individually labeled particle in a quantum many-body system is ill defined. This is why the partition function for this system is simply:

$$Z_2 = 1 + e^{-\beta \epsilon} + e^{-2\beta \epsilon}.$$

In our example of the harmonic trap with cutoff $E_\sigma \leq 2$, each particle is described by 3 quantum numbers and there are $10$ single-particle states, $\sigma_0, \dots, \sigma_9$. The 2-particle partition function for indistinguishable particles is a sum over $\frac{(10 + 2 - 1)!}{2!\,(10-1)!} = 55$ states,

$$E_1 E_2 = \{00: 1, \ \ 01: 3, \ \ 11: 6, \ \ 02: 6, \ \ 12: 18, \ \ 22: 21\}$$
  • $\vert \psi_{E=0} \rangle = \vert \sigma_0 \rangle \otimes \vert \sigma_0 \rangle$
  • $\vert \psi^{i}_{E=1} \rangle = \frac{1}{\sqrt{2}}\left(\vert \sigma_0 \rangle \otimes \vert \sigma_{i} \rangle + \vert \sigma_{i} \rangle \otimes \vert \sigma_0 \rangle\right), \ \ \ i = 1, 2, 3$
  • $\vert \psi^{i}_{E=2} \rangle = \vert \sigma_i \rangle \otimes \vert \sigma_i \rangle, \ \ \ i = 1, 2, 3$
  • $\vert \psi^{i j}_{E=2} \rangle = \frac{1}{\sqrt{2}}\left(\vert \sigma_i \rangle \otimes \vert \sigma_j \rangle + \vert \sigma_{j} \rangle \otimes \vert \sigma_i \rangle\right), \ \ \ i j = 12, 13, 23$
  • $\vert \psi^{p}_{E=2} \rangle = \frac{1}{\sqrt{2}}\left(\vert \sigma_0 \rangle \otimes \vert \sigma_{p} \rangle + \vert \sigma_{p} \rangle \otimes \vert \sigma_0 \rangle\right), \ \ \ p = 4, 5, \dots, 9$
  • $\vert \psi^{i p}_{E=3} \rangle = \frac{1}{\sqrt{2}}\left(\vert \sigma_i \rangle \otimes \vert \sigma_{p} \rangle + \vert \sigma_{p} \rangle \otimes \vert \sigma_i \rangle\right), \ \ \ i = 1, 2, 3, \ p = 4, 5, \dots, 9$
  • $\vert \psi^{p}_{E=4} \rangle = \vert \sigma_p \rangle \otimes \vert \sigma_p \rangle, \ \ \ p = 4, 5, \dots, 9$
  • $\vert \psi^{p q}_{E=4} \rangle = \frac{1}{\sqrt{2}}\left(\vert \sigma_p \rangle \otimes \vert \sigma_{q} \rangle + \vert \sigma_{q} \rangle \otimes \vert \sigma_p \rangle\right), \ \ \ p > q = 4, 5, \dots, 9 \ \ \implies \ \ 15 \ \ \text{possibilities}$

Therefore, the partition function is:

$$Z^{id}_2 = \sum_{0 \leq \sigma \leq \sigma' \leq 9}e^{-\beta E(\sigma, \sigma')} = 1 + 3 e^{-\beta} + 12 e^{-2\beta} + 18 e^{-3\beta} + 21 e^{-4\beta} \neq Z^2_1/2!$$
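We can confirm these coefficients by directly enumerating the 55 unordered pairs (again a small check of ours, with $b$ standing for $e^{-\beta}$):

```python
import sympy as sp

b = sp.symbols('b')                     # b stands for e^{-beta}
energies = [0] * 1 + [1] * 3 + [2] * 6  # sigma_0 ... sigma_9

# sum over unordered pairs sigma <= sigma'
Z2_id = sp.expand(sum(b**(energies[s1] + energies[s2])
                      for s1 in range(10) for s2 in range(s1, 10)))

print(Z2_id)  # 21*b**4 + 18*b**3 + 12*b**2 + 3*b + 1
```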

In terms of occupation numbers

$$Z^{id}_2 = \sum_{n_0=0}^2 \dots \sum_{n_{9}=0}^2 e^{-\beta (n_0 E_0 + \dots + n_9 E_9)}\delta_{(n_0 + \dots + n_9), 2} = \int^{\pi}_{-\pi} \frac{\text{d}\lambda}{2\pi}e^{-2 i \lambda}\prod_{E=0}^{2}\left(1 + e^{-\beta E + i \lambda} + e^{-2\beta E + 2 i \lambda} \right)^{\mathcal{N}(E)} \\= \int^{\pi}_{-\pi} \frac{\text{d}\lambda}{2\pi}e^{-2 i \lambda}\left(1 + e^{i \lambda} + e^{2 i \lambda} \right)\left(1 + e^{-\beta + i \lambda} + e^{-2\beta + 2 i \lambda} \right)^3 \left(1 + e^{-2\beta + i \lambda} + e^{-4\beta + 2 i \lambda} \right)^6,$$

The factor $e^{-2 i \lambda}$ in front keeps only the terms quadratic in $e^{i \lambda}$. The code below shows that this approach gives the same partition function as the direct enumeration above.

In [24]:
b = symbols('e^{-\\beta}')

def f(x):
    a = (1 + x + x**2) * ( (1 + b*x + (b*x)**2) ** 3 ) * ( (1 + b*b*x + (b*b*x)**2) **6 )
    return a

fx = f(x).series(x, 0, 3).removeO()
pn = Poly(fx, x).coeffs()[0]
pn
Out[24]:
$\displaystyle 21 \left(e^{-\beta}\right)^{4} + 18 \left(e^{-\beta}\right)^{3} + 12 \left(e^{-\beta}\right)^{2} + 3 e^{-\beta} + 1$

Using grand-canonical partition function

The grand-canonical partition function $\zeta_i$ of a single energy level $i$, for classical particles (independent and indistinguishable), bosons, and fermions, is given below:

  • $\zeta^{\text{classical}}_i = \sum_{k=0}^{\infty}\frac{z^k}{k!}e^{-k\beta E_i} = \exp\left(z e^{-\beta E_i} \right)$
  • $\zeta^{\text{bosons}}_i = \sum_{k=0}^{\infty}z^k e^{-k\beta E_i} = \frac{1}{1 - z e^{-\beta E_i}}$
  • $\zeta^{\text{fermions}}_i = \sum_{k=0}^{1}z^k e^{-k\beta E_i} = 1 + z e^{-\beta E_i}$

where $z = e^{\beta \mu}$ is the fugacity, and we define $Z_1 \equiv \sum_{i=0}^{i_{max}} e^{-\beta E_i}$. The full grand-canonical functions will be:

  • $Z^{\text{classical}} = \prod_{i=0}^{i_{max}}\exp\left(z e^{-\beta E_i}\right) = \exp\left(z Z_1\right) = 1 + z Z_1 + z^2 \frac{Z_1^2}{2!} + \dots$
  • $Z^{\text{bosons}} = \prod_{i=0}^{i_{max}}\frac{1}{1 - z e^{-\beta E_i}} = \prod_{E=0}^{E_{max}}\left(1 - z e^{-\beta E}\right)^{-\mathcal{N}(E)} = 1 + z Z_1 + z^2 Z_2^{id} + \dots $
  • $Z^{\text{fermions}} = \prod_{i=0}^{i_{max}}\left(1 + z e^{-\beta E_i} \right) = 1 + z Z_1 + z^2 Z_2^{\text{fermions}} + \dots$

The two-particle partition function is the term proportional to $z^2$. Notice that for indistinguishable classical particles the 2-particle partition function is $Z^2_1/2$. The code below verifies that we get the same 2-particle partition function for bosons as in the canonical formalism above.

In [25]:
def f(x):
    a = (1 - x) * ( (1 - b*x) **3 ) * ( (1 - b*b*x) **6 )
    return 1/a

fx = f(x).series(x, 0, 3).removeO()
pn = Poly(fx, x).coeffs()[0]
pn
Out[25]:
$\displaystyle 21 \left(e^{-\beta}\right)^{4} + 18 \left(e^{-\beta}\right)^{3} + 12 \left(e^{-\beta}\right)^{2} + 3 e^{-\beta} + 1$

Five bosons bound in a 3D harmonic trap

By "harmonic trap" we mean an isotropic 3D harmonic potential. Consider 5 bosons in the harmonic trap, with a cutoff on the single-particle energies: $E_\sigma\leq 4$. The degeneracies of the single-particle energy levels $E = 0, 1, 2, 3, 4$ are ${\cal N}(E) = 1, 3, 6, 10, 15$ respectively, so in total there are $1+3+6+10+15 = 35$ possible single-particle states (each state has 3 quantum numbers for the $x, y, z$ coordinates, $(E_{x, i}, E_{y, i}, E_{z, i})$, such that $E_i = E_{x, i} + E_{y, i} + E_{z, i} \leq 4$). We can label the state of each particle by $\sigma_i$, so that

  • $\{\text{5-particle state}\}=\{\sigma_{i_1},\cdots,\sigma_{i_5}\}$.

Here $\sigma_i$ is a number from $0$ to $34$, enumerated as in the example above: if $\sigma_i = 0$, the quantum numbers are $(0,0,0)$; if $\sigma_i=34$, they are $(4, 0, 0)$. Likewise, if $E=3$ then $\sigma=10, \dots, 19$. The partition function for this system is given by

  • $Z(\beta)=\sum_{0\leq\sigma_1\leq\cdots\leq\sigma_5\leq 34}e^{-\beta E(\sigma_1,\cdots,\sigma_5)}$.

This partition function takes the bosonic nature of the particles into account by summing only over distinct (unordered) sets of five single-particle states. In the following program, the average occupation number of the ground state per particle is calculated.

In [26]:
Energy = [0.0] + [1.0] * 3 + [2.0] * 6 + [3.0] * 10 + [4.0] * 15

beta = 1.0
n_states = 0
Z = 0.0
N0_mean = 0.0
E_mean = 0.0

s_max = 35

# particles are indistinguishable: sum over non-decreasing tuples of state labels
for s_0 in range(s_max):
    for s_1 in range(s_0, s_max):
        for s_2 in range(s_1, s_max):
            for s_3 in range(s_2, s_max):
                for s_4 in range(s_3, s_max):
                    n_states += 1
                    state = [s_0, s_1, s_2, s_3, s_4]
                    E = sum([Energy[s] for s in state])
                    Z += np.exp(-beta * E)
                    E_mean += E * np.exp(-beta * E)
                    N0_mean += state.count(0) * np.exp(-beta * E)

print(n_states, Z, E_mean / Z / 5.0, N0_mean / Z / 5.0)
575757 17.373297218268803 1.0313326531083176 0.44696950193277984

The number of states is $\binom{N_{\rm states} + n_{\rm bosons} - 1}{n_{\rm bosons}} = \frac{(N_{\rm states} + n_{\rm bosons} - 1)!}{n_{\rm bosons}!\,(N_{\rm states}-1)!}$. That is why, for $n_{\rm bosons}=5$ and $N_{\rm states} = 35$, the number of states is $575757$. Below, we convert the above code into a function.
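The multiset coefficient can be checked in one line with `math.comb` (our addition):

```python
from math import comb

n_bosons, n_states = 5, 35
# number of multisets of size n_bosons drawn from n_states single-particle states
print(comb(n_states + n_bosons - 1, n_bosons))  # 575757
```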

Generalization

In [27]:
def make_tuples(depth, n, start=0):
    if depth == 0:
        yield ()
    else:
        for x in range(start, n):
            for t in make_tuples(depth - 1, n, x):
                yield (x,) + t

def bosons_bounded_harmonic(beta, Emax, N):
    """ Here Emax is the energy cutoff for each boson,
        N is the number of bosons in a 3D trap."""
    Energy = []           
    n_states_1p = 0                              # initialise the total number of single trapped boson states
    for n in range(Emax + 1):
        degeneracy = int((n + 1) * (n + 2) / 2)  # degeneracy in the 3D harmonic oscillator
        Energy += [float(n)] * degeneracy    
        n_states_1p += degeneracy

    n_states_Np = 0        # initialise the total number states of N trapped bosons
    Z = 0.0                # initialise the partition function
    N0_mean = 0.0
    E_mean = 0.0 
    
    for state in make_tuples(N, n_states_1p):
        n_states_Np += 1
        E = 0
        for s in state:
            E += Energy[s]                             # calculate the total energy by above enumeration
        # print(state, E)
        Z += np.exp(-beta * E)                         # canonical partition function
        E_mean += E * np.exp(-beta * E)                # avg. total energy
        N0_mean += state.count(0) * np.exp(-beta * E)  # avg. ground level occupation number
    return n_states_Np, Z, E_mean, N0_mean
In [28]:
%%time 

Emax = 4
N = 5       
beta = 1.0  # inverse temperature

n_states_Np, Z, E_mean, N0_mean = bosons_bounded_harmonic(beta, Emax, N)

print('Temperature:', 1 / beta, '\nTotal number of possible states:', n_states_Np, '\nPartition function:', Z,\
        '\nAverage energy per particle:', E_mean / Z / float(N),\
        '\nCondensate fraction (ground state occupation per particle):', N0_mean / Z / float(N))
Temperature: 1.0 
Total number of possible states: 575757 
Partition function: 17.373297218268803 
Average energy per particle: 1.0313326531083176 
Condensate fraction (ground state occupation per particle): 0.44696950193277984
Wall time: 2.78 s

Average ground state occupation number vs temperature

In [29]:
%%time

cond_frac = []
temperature = []

for T in np.linspace(0.1, 1.8, 20):
    n_states_Np, Z, E_mean, N0_mean = bosons_bounded_harmonic(1.0 / T, Emax, N)
    cond_frac.append(N0_mean / Z / float(N))
    temperature.append(T/float(N)**(1/3.0))
Wall time: 1min 2s
In [30]:
plt.rcParams['figure.figsize']=(12, 8) # set the figure size

plt.plot(temperature, cond_frac)
plt.title(f"Condensate fraction of $N$={N} bosons bounded in trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.xlim(0, 1.2)
plt.ylim(0.1, 1.05)
plt.show()

Analytic Approach: Bosons in Canonical Ensemble


Reminder:

  • The microcanonical ensemble consists of a collection of copies of a particular isolated system (one for each state accessible to it) at a particular fixed energy E and particle number N.

  • In the canonical ensemble we consider the (closed) system in thermal contact with a reservoir at temperature T. This ensemble contains copies of the system together with the reservoir (one copy for each allowed state of the combined system). In the canonical ensemble the energy E varies among members of the ensemble, while the particle number N remains fixed.


What is Bose-Einstein Condensation?

The fact that at very low temperatures almost all particles sit in the ground state is a simple consequence of Boltzmann statistics; in particular, at zero temperature all the particles populate the ground state. Bose-Einstein condensation is something else!

We have Bose-Einstein condensation if a finite fraction of the system is in the ground state at temperatures much higher than the gap between the ground state and the first excited state (which is one, in our units). The temperature at which the condensate fraction becomes significant is $T \sim N^{1/3}$. For large $N$ this temperature is much larger than the energy gap.

Bose-Einstein condensation occurs when all of a sudden a finite fraction of particles populate the single-particle ground state. In a trap, this happens at higher and higher temperatures as we increase the particle number.

Using occupation numbers

Alternatively, in the example of 5 trapped bosons, we can characterize any single particle state $\sigma=0,\cdots,34$ by an occupation number $n_\sigma$. Using this occupation number representation, the energy is given by

$$E=n_0 E_0+\cdots + n_{34}E_{34},$$

and the partition function is

$$Z_5(\beta)=\sum^{N=5}_{n_0=0}\cdots\sum^{N=5}_{n_{34}=0}e^{-\beta(n_0 E_0+\cdots + n_{34}E_{34})}\delta_{(n_0+\cdots + n_{34}),N=5}.$$

We can generalize this to arbitrary $N$; using the integral representation of the Kronecker delta, $\delta_{n,N} = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}\,e^{i(n-N)\lambda}$, we obtain

$$Z_N(\beta) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{j=0}^{j_\text{max}}\left(\sum_{n_j=0}^Ne^{n_j(-\beta E_j + i \lambda)}\right) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{j=0}^{j_\text{max}}\left(\frac{1 - \Omega^{N+1}}{1 - \Omega}\right) \\= \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)}, $$

where $\Omega \equiv e^{-\beta E_j + i \lambda}$, $E_{\text{max}}$ is the maximal allowed energy per boson ($E_{\text{max}}=4$ in the above example), $\mathcal{N}(E) = (E+1)(E+2)/2$ is the degeneracy of level $E$, and

$$f_E(\beta,\lambda) = \left\{ \begin{array}{ll} \frac{1 - e^{i(N+1)\lambda}}{1 - e^{i\lambda}} & \mbox{if $E = 0$}\\ \frac{1}{1 - e^{-\beta E + i\lambda}} & \mbox{if $E > 0$ and $\beta EN \gg 1$} \end{array} \right.$$
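The integral representation rests on the identity $\delta_{n,N} = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{i(n-N)\lambda}$, which a uniform $\lambda$-grid of the kind used in the code below approximates well; a quick numerical check (the grid size 1000 is an illustrative choice):

```python
import numpy as np

# delta_{n,N} = integral over [-pi, pi] of dlam/(2*pi) * exp(i*(n-N)*lam),
# approximated by a Riemann sum on a uniform grid.
def kron_delta(n, N, n_points=1000):
    lams = np.linspace(-np.pi, np.pi, n_points)
    dlam = (lams[1] - lams[0]) / (2.0 * np.pi)
    return np.real(dlam * np.sum(np.exp(1.0j * (n - N) * lams)))
```

The result is 1 (up to grid error of order `1/n_points`) when $n = N$, and near 0 otherwise.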

In this case, the average ground state occupation number can be determined as follows:

$$\langle N_0(\beta, \lambda) \rangle = \frac{1}{Z_N(\beta)}\int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}N_0(\beta, \lambda)\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)},$$

where

$$N_0(\beta, \lambda) = -i\frac{\partial \log f_0}{\partial \lambda} = \left[-\frac{(N+1)e^{i \lambda (N+1)}}{1 - e^{i\lambda (N+1)}} + \frac{e^{i\lambda}}{1-e^{i\lambda}}\right],$$

and $f_k = \sum_{n_k}e^{n_k(-\beta E_k + i\lambda)}$. Similarly,

$$\langle E(\beta, \lambda) \rangle = \frac{1}{Z_N(\beta)}\int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}E(\beta, \lambda)\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)},$$

where

$$E(\beta, \lambda) = -\sum_k \frac{\partial \log f_k}{\partial \beta} = \sum_{E \neq 0} E \mathcal{N}(E)~\frac{e^{-\beta E + i \lambda}}{1 - e^{-\beta E + i \lambda}}.$$
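As a sanity check on these formulas, the $\lambda$-integral with the exact truncated sums $f_E=\sum_{n=0}^{N}e^{n(-\beta E+i\lambda)}$ (no large-$\beta E N$ approximation) reproduces the partition function $Z \approx 17.3733$ obtained by direct enumeration above; the grid size is an illustrative choice:

```python
import numpy as np

beta, N, Emax = 1.0, 5, 4
lams = np.linspace(-np.pi, np.pi, 20001)
dlam = (lams[1] - lams[0]) / (2.0 * np.pi)

prod = np.ones_like(lams, dtype=complex)
for E in range(Emax + 1):
    NE = (E + 1) * (E + 2) // 2                  # degeneracy of level E
    # exact truncated geometric sum over occupation numbers n = 0..N
    fE = sum(np.exp(n * (-beta * E + 1.0j * lams)) for n in range(N + 1))
    prod *= fE ** NE
Z = np.real(np.sum(dlam * np.exp(-1.0j * N * lams) * prod))
print(Z)   # ≈ 17.3733
```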

Canonical Bosons

In the canonical ensemble the total number of particles is fixed. While computationally more involved than the grand canonical ensemble, it is the preferred framework for numerical simulations.

In [31]:
class CanonicalBosons:
    def __init__(self, beta, N, Emax, n_points=1000, verbose=True):
        self.beta = beta
        self.N = N
        self.Emax = Emax
        self.n_points = n_points
        self.verbose = verbose
        self.lams = None
        self.dlam = None
        self.Z = None
        self._initialize()

    def _initialize(self):
        self.lams = np.linspace(-np.pi, np.pi, self.n_points)
        self.dlam = (self.lams[1] - self.lams[0]) / (2.0 * np.pi)
        self.Z = np.real(self.z_part())
        self.N0 = np.real(self.mean_num0())
        self.Emean = np.real(self.mean_energy())
        if self.verbose:
            self.report()

    @staticmethod
    def _e(x):
        return np.exp(1.0j * x)

    @staticmethod
    def N_states(x):
        return (x + 1.0) * (x + 2.0) / 2.0

    def f(self, lam, E):
        a = self._e(lam)
        b = self._e(lam * (self.N + 1))
        if E == 0:
            return (1.0 - b) / (1.0 - a)
        elif E > 0:
            c = np.exp(-self.beta * E)
            return 1.0 / (1.0 - c * a)

    def prod_fs(self, lam):
        prod = 1.0
        for E in range(self.Emax + 1):
            prod *= self.f(lam, E) ** self.N_states(E)
        return prod

    def _calc_mean(self, fn):
        return self.dlam * np.sum([self._e(- lam * self.N) * fn(lam) * self.prod_fs(lam) for lam in self.lams])

    def z_part(self):
        return self._calc_mean(lambda x: 1.0)

    def num0(self, lam):
        a = self._e(-lam)
        b = self._e(-lam * (self.N + 1))
        return (1.0 / (a - 1.0)) - ((self.N + 1) / (b - 1.0))

    def mean_num0(self):
        return self._calc_mean(lambda x: self.num0(x)) / self.Z

    def energy(self, lam):
        a = self._e(-lam)
        eng = 0
        for E in range(1, self.Emax + 1):
            eng += self.N_states(E) * E / (a * np.exp(self.beta * E) - 1.0)
        return eng

    def mean_energy(self):
        return self._calc_mean(lambda x: self.energy(x)) / self.Z

    def report(self):
        print(f'Temperature: {1.0 / self.beta:.7f},'
              f'\nNumber of bosons: {self.N},'
              f'\nSingle boson energy cutoff: {self.Emax},'  
              f'\nPartition function: {self.Z:.7f},'
              f'\nGround state occupation per particle: {self.N0 / self.N:.7f},'
              f'\nAverage energy per particle: {self.Emean / self.N:.7f}')

    def __str__(self):
        return f"<{type(self).__name__}: T={1.0/self.beta:.4f}, number of bosons N={self.N}, maximal energy Emax={self.Emax}>"

Below is a more direct and much faster implementation

In [32]:
@njit
def canonical_bosons(beta, N, Emax, n_points=1000):
    Z, Eavg, N0 = 0, 0, 0
    lams = np.linspace(-np.pi, np.pi, n_points)
    dlam = (lams[1] - lams[0])/(2 * np.pi)
    for lam in lams:
        a = np.exp(1.0j * lam)
        b = a ** (N + 1)
        f0 = (1.0 - b) / (1.0 - a)
        n0 = (1.0 / (1.0/a - 1.0)) - ((N + 1) / (1.0/b - 1.0))
        Zlam = f0
        Elam = 0.0
        for E in range(1, Emax+1):
            NE = (E+1)*(E+2)/2
            c = np.exp(-beta * E)
            fE = 1.0 / (1.0 - c * a)
            Zlam *= fE ** NE
            Elam += NE * E * c * a / (1.0 - c * a)
        Z += dlam * Zlam * (a  ** (-N))
        Eavg += dlam * Zlam * Elam * (a  ** (-N))
        N0 += dlam * Zlam * n0 * (a  ** (-N))
    Eavg /= Z
    N0 /= Z
    return np.real(N0)/N, np.real(Eavg)/N, np.real(Z)

Condensate fraction $\langle N_0 \rangle /N$ vs scaled temperature at fixed $E_{max}$

In [33]:
def occup_num(beta, N, Emax):
    cb = CanonicalBosons(beta, N, Emax, verbose=False)
    return cb.N0/N, cb.Emean/N
In [34]:
%%time 

Emax = 4
N = 5
cond_frac2 = []
av_eng_pp = []

Ts = np.linspace(0.1, 1.8, 20)
for T in Ts:
    n, e = occup_num(1.0/T, N, Emax)
    cond_frac2.append(n)
    av_eng_pp.append(e)
Wall time: 2.18 s
In [35]:
%%time 

Emax = 4
N = 5
cond_frac3 = []
av_eng_pp3 = []

Ts = np.linspace(0.1, 1.8, 20)
for T in Ts:
    n, e, z = canonical_bosons(beta=1.0/T, N=N, Emax=Emax)
    cond_frac3.append(n)
    av_eng_pp3.append(e)
Wall time: 603 ms
In [36]:
plt.rcParams['figure.figsize']=(8, 6) # set the figure size

ts = Ts/float(N)**(1/3.0)
plt.plot(temperature, cond_frac, 'b-x', lw=0.8, markersize=6, label='direct approach')
plt.plot(ts, cond_frac2, 'r-o', lw=0.5,  markersize=4, label='analytic approach class')
plt.plot(ts, cond_frac3, 'g-.', lw=0.5,  markersize=4, label='analytic approach func')
plt.title(f"Condensate fraction of $N$={N} bosons\n bounded in trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.xlim(0, 1.2)
plt.ylim(0.1, 1.05)
plt.legend(loc="upper right")
plt.show()

Different values of $N$

In [37]:
%%time

Emax = 4
Tmax = 10
NumT= 50


Ts = np.linspace(0.1, Tmax, NumT)
Ns = [5, 25, 50, 100]

cond_frac_dict = {}
av_eng_pp_dict = {}


for N in Ns:
    cond_frac_dict[N] = []
    av_eng_pp_dict[N] = []
    for T in Ts:
        n, e, z = canonical_bosons(1.0/T, N, Emax)
        cond_frac_dict[N].append(n)
        av_eng_pp_dict[N].append(e)
    print("N =", N)
N = 5
N = 25
N = 50
N = 100
Wall time: 211 ms
In [38]:
fig = plt.figure(figsize=(14, 6))

ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)

for N in Ns:
    ts = Ts /float(N)**(1/3.0)
    ax1.plot(ts, cond_frac_dict[N], '-o', lw=0.5, label=f'$N=${N}')
    ax2.plot(ts, av_eng_pp_dict[N], '-o', lw=0.5, label=f'$N=${N}')

ax1.hlines(0.5, 0, 2.2, colors='k', lw=2.0, label='half line')
ax1.set_title(f"Condensate fraction of $N$ bosons\n bounded in a trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
ax1.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax1.set_ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
ax1.set_xlim(0, 2.2)
ax1.set_ylim(0, 1.05)
ax1.legend(loc="best")

ax2.set_title(f"Mean energy per particle for $N$ bosons\n bounded in a trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
ax2.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax2.set_ylabel('$\\langle E \\rangle$ / N', fontsize = 20)
ax2.set_xlim(0, 2.2)
ax2.set_ylim(0, 3.05)
ax2.legend(loc="best")
plt.show()
In [39]:
%%time

Emax = 16
Tmax = 10
NumT= 50


Ts = np.linspace(0.1, Tmax, NumT)
Ns = [5, 25, 50, 100]

cond_frac_dict = {}
av_eng_pp_dict = {}


for N in Ns:
    cond_frac_dict[N] = []
    av_eng_pp_dict[N] = []
    for T in Ts:
        n, e, z = canonical_bosons(1.0/T, N, Emax)
        cond_frac_dict[N].append(n)
        av_eng_pp_dict[N].append(e)
    print("N =", N)
N = 5
N = 25
N = 50
N = 100
Wall time: 561 ms
In [40]:
fig = plt.figure(figsize=(14, 6))

ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)

for N in Ns:
    ts = Ts /float(N)**(1/3.0)
    ax1.plot(ts, cond_frac_dict[N], '-o', lw=0.5, label=f'$N=${N}')
    ax2.plot(ts, av_eng_pp_dict[N], '-o', lw=0.5, label=f'$N=${N}')

ax1.hlines(0.5, 0, 2.2, colors='k', lw=2.0, label='half line')
ax1.set_title(f"Condensate fraction of $N$ bosons\n bounded in a trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
ax1.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax1.set_ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
ax1.set_xlim(0, 1.2)
ax1.set_ylim(0, 1.05)
ax1.legend(loc="best")

ax2.set_title(f"Mean energy per particle for $N$ bosons\n bounded in a trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
ax2.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax2.set_ylabel('$\\langle E \\rangle$ / N', fontsize = 20)
ax2.set_xlim(0, 1.2)
ax2.set_ylim(0, 9.05)
ax2.legend(loc="best")
plt.show()

Dependence of condensate fraction at fixed $N$ on $E_{max}$

In [41]:
%%time

N = 128
Tmax = 20
NumT= 100

Ts = np.linspace(0.1, Tmax, NumT)
Emaxs = [4, 8, 16, 24, 32, 64]

cond_frac_dict = {}
av_eng_pp_dict = {}

for Emax in Emaxs:
    cond_frac_dict[Emax] = []
    av_eng_pp_dict[Emax] = []
    for T in Ts:
        n, e, z = canonical_bosons(1.0/T, N, Emax)
        cond_frac_dict[Emax].append(n)
        av_eng_pp_dict[Emax].append(e)
    print("Emax =", Emax)
Emax = 4
Emax = 8
Emax = 16
Emax = 24
Emax = 32
Emax = 64
Wall time: 2.39 s
In [42]:
fig = plt.figure(figsize=(14, 6))

ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)

for Emax in Emaxs:
    ts = Ts /float(N)**(1/3.0)
    cf = np.array(cond_frac_dict[Emax])
    ae = np.array(av_eng_pp_dict[Emax])
    cf[np.isnan(cf)] = 0
    ae[np.isnan(ae)] = 0
    ax1.plot(ts, cf, '-o', lw=0.5, label=f'$Emax=${Emax}')
    ax2.plot(ts, ae, '-o', lw=0.5, label=f'$Emax=${Emax}')

ax1.hlines(0.5, 0, 2.2, colors='k', lw=2.0, label='half line')
ax1.set_title(f"Condensate fraction of $N$={N} bosons\n bounded in a trap for various" + " $E_{max}$", fontsize = 22)
ax1.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax1.set_ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
ax1.set_xlim(0, 1.01)
ax1.set_ylim(0, 1.05)
ax1.legend(loc="best")

ax2.set_title(f"Mean energy per particle for $N$={N} bosons\n bounded in a trap for various" + " $E_{max}$", fontsize = 22)
ax2.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax2.set_ylabel('$\\langle E \\rangle$ / N', fontsize = 20)
ax2.set_xlim(0, 1.01)
ax2.set_ylim(0, 13.05)
ax2.legend(loc="best")
plt.show()

Dependence of critical temperature $T_c$ at various fixed $E_{max}$ on $N$

In [43]:
from scipy.optimize import fsolve
from functools import partial

def num(T, N, Emax):
    n, _, _ = canonical_bosons(1.0/T, N, Emax)
    return n - 0.5

@jit  # same function as defined above, repeated here so this cell is self-contained
def canonical_bosons(beta, N, Emax, n_points=1000):
    Z, Eavg, N0 = 0, 0, 0
    lams = np.linspace(-np.pi, np.pi, n_points)
    dlam = (lams[1] - lams[0])/(2 * np.pi)
    for lam in lams:
        a = np.exp(1.0j * lam)
        b = a ** (N + 1)
        f0 = (1.0 - b) / (1.0 - a)
        n0 = (1.0 / (1.0/a - 1.0)) - ((N + 1) / (1.0/b - 1.0))
        Zlam = f0
        Elam = 0.0
        for E in range(1, Emax+1):
            NE = (E+1)*(E+2)/2
            c = np.exp(-beta * E)
            fE = 1.0 / (1.0 - c * a)
            Zlam *= fE ** NE
            Elam += NE * E * c * a / (1.0 - c * a)
        Z += dlam * Zlam * (a  ** (-N))
        Eavg += dlam * Zlam * Elam * (a  ** (-N))
        N0 += dlam * Zlam * n0 * (a  ** (-N))
    Eavg /= Z
    N0 /= Z
    return np.real(N0)/N, np.real(Eavg)/N, np.real(Z)
In [44]:
%%time

Emaxs = [2, 4, 8, 16, 20, 32]
Ns = range(10, 100, 2)

T_crit_dict = {}
E_avg_dict = {}

for Emax in Emaxs:
    T_crit_dict[Emax] = []
    E_avg_dict[Emax] = []
    for N in Ns:
        f = partial(num, N=N, Emax=Emax)
        starting_guess = np.array([1.0])
        vals = fsolve(f, starting_guess)
        n1, e1, _ = canonical_bosons(1.0/vals[0], N, Emax)
        T_crit_dict[Emax].append(vals[0])
        E_avg_dict[Emax].append(e1)
    print("Emax =", Emax)
Emax = 2
Emax = 4
Emax = 8
Emax = 16
Emax = 20
Emax = 32
Wall time: 9min 34s
In [48]:
fig = plt.figure(figsize=(14, 6))

ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)

for Emax in Emaxs:
    ax1.plot(Ns, T_crit_dict[Emax], '-o', lw=0.5, label=f'$Emax=${Emax}')
    ax2.plot(Ns, E_avg_dict[Emax], '-o', lw=0.5, label=f'$Emax=${Emax}')

ax1.set_title("$T_{crit}$ vs N for various $E_{max}$", fontsize = 22)
ax1.set_xlabel('$N$', fontsize = 20)
ax1.set_ylabel('$T_{crit}$', fontsize = 20)
ax1.legend(loc="best")

ax2.set_title("$\\langle E_{crit} \\rangle$ / N vs N for various $E_{max}$", fontsize = 22)
ax2.set_xlabel('$N$', fontsize = 20)
ax2.set_ylabel('$\\langle E_{crit} \\rangle$ / N', fontsize = 20)
ax2.legend(loc="best")
ax2.set_ylim(0, 3.5)
plt.show()

Fitting

In [52]:
from scipy.optimize import curve_fit

def fit_curve(x, a, b):
    return a * (x ** b)  
    
fig = plt.figure(figsize=(14, 6))

ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)

for Emax in Emaxs[:-1]:
    popt, pcov = curve_fit(fit_curve, Ns, np.array(T_crit_dict[Emax]))
    poptE, pcovE = curve_fit(fit_curve, Ns, np.array(E_avg_dict[Emax]))

    ax1.plot(Ns, np.array(T_crit_dict[Emax]), 'o', lw=1.0, markersize=2, label=f'$Emax=${Emax}')
    ax2.plot(Ns, np.array(E_avg_dict[Emax]), 'o', lw=1.0, markersize=2, label=f'$Emax=${Emax}')
    
    ax1.plot(Ns, fit_curve(Ns, *popt), lw=1.0, label='fit: a=%5.3f, b=%5.3f' % tuple(popt))
    ax2.plot(Ns, fit_curve(Ns, *poptE), lw=1.0, label='fit: a=%5.3f, b=%5.3f' % tuple(poptE))


ax1.set_title("$T_{crit}$ vs N for various $E_{max}$", fontsize = 22)
ax1.set_xlabel('$N$', fontsize = 20)
ax1.set_ylabel('$T_{crit}$', fontsize = 20)
ax1.legend(loc="best")
ax1.set_ylim(1, 9.5)

ax2.set_title("$\\langle E_{crit} \\rangle$ / N vs N for various $E_{max}$", fontsize = 22)
ax2.set_xlabel('$N$', fontsize = 20)
ax2.set_ylabel('$\\langle E_{crit} \\rangle$ / N', fontsize = 20)
ax2.legend(loc="best")
ax2.set_ylim(0.5, 3.0)

plt.show()

As we can see, $T_{crit}$ and $E_{crit}/N$ both scale approximately as $N^{1/3}$ for large values of $E_{max}$ and $N$. Later, we will show that $T_{crit} \sim N^{1/3}$ in the large-$N$ limit, which justifies scaling the temperature by this factor.

Trapped bosons in grand canonical ensemble

We will consider the $N \to \infty$ limit, in which the integrals from the canonical ensemble can be evaluated using the saddle point approximation. The $\lambda$ saddle point turns out to correspond to the chemical potential (the energy cost of introducing an additional particle into the system).

We want to integrate the following, in the limit $N \to \infty$, using saddle point approximation,

$$Z_N(\beta) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}Z_N(\beta, \lambda) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{j=0}^{j_\text{max}}\left(\sum_{n_j=0}^Ne^{n_j(-\beta E_j + i \lambda)}\right) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)}. $$

Let's find the saddle points of the log of the integrand through the equation:

$$ -i\frac{\partial}{\partial \lambda} \left[-iN\lambda - \sum_{E=0}^{E_\text{max}}\mathcal{N}(E) \log\left(1 - e^{-\beta E + i \lambda}\right) \right] = 0 $$$$ \implies N = \sum_{E=0}^{E_\text{max}}\mathcal{N}(E) \frac{e^{-\beta E + i \lambda}}{1 - e^{-\beta E + i \lambda}} $$

Writing $\lambda = \operatorname{Re}\lambda - i \beta \mu$, where $\mu$ will be shown to play the role of the chemical potential, the partition function at saddle point (where $\operatorname{Re}\lambda = 0$) will be:

$$Z_N(\beta, \mu) = \prod_{\text{single particle states} \ \sigma} \frac{1}{1 - e^{-\beta(E_{\sigma} - \mu)}} $$

This is, at the same time, the partition function of a system of independent states $\sigma$ with energies $E_{\sigma} - \mu$. This system is called the grand canonical ensemble: the saddle point of the canonical partition function defines the partition function in the grand canonical ensemble, in which the total particle number fluctuates.


In the grand canonical ensemble the probability of having $k$ particles in state $\sigma$ is:

$$\pi(N_{\sigma} = k) = \frac{e^{-\beta(E_{\sigma} - \mu)k}}{\sum_{N_{\sigma}=0}^{\infty}e^{-\beta(E_{\sigma} - \mu)N_{\sigma}}},$$

therefore, the mean number of particles in state $\sigma$ is:

$$\langle N_{\sigma} \rangle = \frac{e^{-\beta(E_{\sigma} - \mu)}}{1 - e^{-\beta(E_{\sigma} - \mu)}},$$

For the ground state, $E_0=0$ and $k=N_0$, we will have:

$$\pi(N_0) = e^{\beta\mu N_0} - e^{\beta\mu (N_0+1)}, \ \ \implies \ \ \langle N_{0} \rangle = \frac{e^{\beta \mu}}{1 - e^{\beta\mu}}.$$
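The ground-state distribution above is geometric in $e^{\beta\mu}$; a quick numerical check of the normalization and the mean (the value $\beta\mu = -0.5$ is illustrative):

```python
import numpy as np

q = np.exp(-0.5)                  # e^{beta*mu} with beta*mu = -0.5 (mu < 0)
n = np.arange(200)                # truncate the rapidly converging series
pi_n = q**n - q**(n + 1)          # pi(N_0 = n)
assert abs(pi_n.sum() - 1.0) < 1e-12                    # normalized
assert abs((n * pi_n).sum() - q / (1.0 - q)) < 1e-10    # <N_0> = q/(1-q)
```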

The mean total number of particles in the harmonic trap, in the grand canonical ensemble, is:

$$\langle N(\mu) \rangle = \sum_{E} \mathcal{N}(E)\frac{e^{-\beta(E - \mu)}}{1 - e^{-\beta(E - \mu)}},$$

The chemical potential determines the saddle point of the canonical partition function for $N$ particles. Equivalently, it is the value at which the grand canonical system satisfies $\langle N \rangle = N$.

To summarize: $Z_N(\beta, \lambda)$, when integrated over $\lambda$, gives the exact canonical partition function for $N$ particles. In the complex $\lambda$-plane, this function has a saddle point defined by a relation between $N$ and the chemical potential $\mu$. In the large-$N$ limit, this point dominates the integral for $Z_N$. The saddle point of the canonical ensemble also gives the partition function in the grand canonical ensemble.

In [53]:
@njit
def ZN(z, beta=1.0, N=5, Emax=1000):
    prod = np.exp(-1.0j * N * z)
    for E in range(0, Emax + 1):
        NE = (E+1)*(E+2)/2.0  # for 3D
        a = np.exp(-beta * E + 1.0j * z)
        prod /= (1.0 - a) ** NE
    return prod
In [55]:
%matplotlib inline  
%matplotlib notebook
%pylab

xa, xb = -np.pi, np.pi
ya, yb = -1.0, 1.0

n = 60
x, y = np.mgrid[xa:xb:(1.0j * n), ya:yb:(1.0j * n)]
z = x + 1.0j * y
w = np.abs(ZN(z)) 
w[w > 1e3] = 0

fig = plt.figure(figsize=(12,8))

ax_real = fig.add_subplot(1,1,1, projection='3d')
ax_real.plot_surface(z.real, z.imag, w.real,
                    rstride=1, cstride=1, cmap=cm.jet)
ax_real.plot_wireframe(z.real, z.imag, w.real, rstride=2, cstride=2, alpha=0.4, lw=0.5)
ax_real.set_xlabel("Re($\lambda$)", fontsize=20, labelpad=20)
ax_real.set_ylabel("Im($\lambda$)", fontsize=20, labelpad=20)
ax_real.set_zlabel("$Z_N(\\beta, \lambda)$", fontsize=20, labelpad=20)
ax_real.set_title("Partition function $Z_{N=5}(\\beta=1, \lambda)$ in a complex plane", fontsize=25)

# # plot the imaginary part
# ax_imag = fig.add_subplot(1,2,2,projection='3d')
# ax_imag.plot_surface(z.real, z.imag, w.imag,
#                     rstride=1, cstride=1, cmap=cm.jet)

plt.show()
Using matplotlib backend: nbAgg
Populating the interactive namespace from numpy and matplotlib

A nice way to plot a multivalued complex function (unrelated to the main content)

In [56]:
%matplotlib inline  
%matplotlib notebook
%pylab

branching_number = 3

Nr = 25
Ntheta = 360

# compute the theta,R domain
theta = np.linspace(0, 2*np.pi * branching_number, Ntheta)
r = np.linspace(0, np.pi, Nr)
Theta, R = np.meshgrid(theta, r)

z = R*np.exp(1.0j*Theta)

# compute w = log z. THE KEY IDEA is to let Theta run over several full
# turns, so that every branch of the multivalued function gets drawn.
# w = np.sqrt(R) * np.exp(1j*Theta/2)   # alternative: the branches of w^2 = z
w = np.log(R) + 1j * Theta

fig = plt.figure(figsize=(12,6))

# plot the real part
ax_real = fig.add_subplot(1,2,1,projection='3d')
ax_real.plot_surface(z.real, z.imag, w.real,
                    rstride=1, cstride=1, cmap=cm.jet, alpha=0.5)
ax_real.plot_wireframe(z.real, z.imag, w.real, rstride=2, cstride=2, alpha=0.2, lw=0.5)
ax_real.set_xlabel("Re(z)", fontsize=20, labelpad=20)
ax_real.set_ylabel("Im(z)", fontsize=20, labelpad=20)
ax_real.set_zlabel("$Re(f(z))$", fontsize=20, labelpad=20)

# plot the imaginary part
ax_imag = fig.add_subplot(1,2,2,projection='3d')
ax_imag.plot_surface(z.real, z.imag, w.imag,
                    rstride=1, cstride=1, cmap=cm.jet, alpha=0.5)
ax_imag.plot_wireframe(z.real, z.imag, w.imag, rstride=2, cstride=2, alpha=0.2, lw=0.5)
ax_imag.set_xlabel("Re(z)", fontsize=20, labelpad=20)
ax_imag.set_ylabel("Im(z)", fontsize=20, labelpad=20)
ax_imag.set_zlabel("$Im(f(z))$", fontsize=20, labelpad=20)

plt.show()
Using matplotlib backend: nbAgg
Populating the interactive namespace from numpy and matplotlib

Finding $\mu$ given $\langle N \rangle$

In [57]:
@jit
def f1(mu, targetN, beta, Emax):
    Nm = 0
    bmu = np.exp(-beta * mu)
    for E in range(Emax + 1):
        NE = (E+1)*(E+2)/2
        bE = np.exp(beta * E)
        Nm += NE / (bmu * bE - 1.0)
    return Nm - targetN

# no @jit here: partial and fsolve cannot be compiled by numba
def get_mu(N, beta, func=f1, Emax=9000):
    f = partial(func, targetN=N, beta=beta, Emax=Emax)
    starting_guess = np.array([-1e-4])
    result = fsolve(f, x0=starting_guess, xtol=1e-18, maxfev=5000)
    return result[0]
In [58]:
beta = 1/10.0

# assertion values to pick the correct initial guess
# Ns = [400, 800, 1000, 1200, 1400, 2000, 5000, 10000]
# mus = [-11.173318075200484, -4.828835689027581, -2.9284713153883755, -1.4806765761017484, -0.4293010203120327, -0.018694934432555214, -0.002831916100988416, -0.001172224854500701]

Ns = np.arange(100, 1000, 50)
mus = []

for n in Ns:
    mu = get_mu(n, beta)
    mus.append(mu)
    
print(mus)
[-24.63942058210083, -20.648882755131268, -17.836936452593505, -15.671171272785147, -13.914459793500066, -12.440332216908182, -11.173318075200484, -10.064758173599355, -9.081449689929931, -8.19973270864873, -7.402161372586891, -6.6755128423738395, -6.009537186851215, -5.396140834203716, -4.828835689027581, -4.302357699048994, -3.812397597399852, -3.3554088398841997]
In [59]:
%matplotlib inline

fig = plt.figure(figsize=(10,8))

plt.plot(Ns, mus)
plt.xlabel("$< N >$", fontsize=22)
plt.ylabel("$\mu$", fontsize=22)
plt.title("Chemical potential vs average number of trapped bosons", fontsize=22)
plt.show()

Condensate fraction vs scaled temperature

In [60]:
@njit
def mean_n0(mu, beta):
    bmu = np.exp(-beta * mu)
    return 1.0/(bmu - 1.0)
In [61]:
Tmax = 10
NumT= 20
Ts = np.linspace(0.1, Tmax, NumT)
Ns = [10**p for p in range(1, 5)][::-1]

cf_n0_dict = {}

for N in Ns:
    cf_n0_dict[N] = []
    for T in Ts:
        beta = 1.0/T
        mu = get_mu(N, beta)
        mn0 = mean_n0(mu, beta)
        cf_n0_dict[N].append(mn0/N)
In [62]:
fig = plt.figure(figsize=(10,8))

for N in Ns:
    cf_n0s = cf_n0_dict[N]
    ts = Ts /float(N)**(1/3.0)
    plt.plot(ts, cf_n0s, '-o', markersize=5, lw=0.7, label=f"N={N}")

plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.title("Condensate fraction vs scaled temperature", fontsize=22)
plt.legend(loc="upper right")
plt.xlim(0.1, 1.1)
plt.show()

Comparing with canonical ensemble result

In [63]:
Tmax = 10
NumT= 20
Ts = np.linspace(0.1, Tmax, NumT)
N = 100
Emax = 200

grand_cond_frac = []
can_cond_frac = []

for T in Ts:
    beta = 1.0/T
    mu = get_mu(N, beta)
    mn0 = mean_n0(mu, beta)
    grand_cond_frac.append(mn0/N)

    n, _, _ = canonical_bosons(beta, N, Emax)
    can_cond_frac.append(n)
    
cf = np.array(can_cond_frac)
cf[np.isnan(cf)] = 0
In [64]:
fig = plt.figure(figsize=(10,8))

ts = Ts /float(N)**(1/3.0)
plt.plot(ts, grand_cond_frac, '-x', markersize=5, lw=0.7, label=f"grand canonical")
plt.plot(ts, cf, '-o', lw=0.7, markersize=5, label=f'canonical')

plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.title(f"Condensate fraction vs scaled temperature ($N=${N})", fontsize=22)
plt.legend(loc="upper right")
plt.xlim(0.1, 1.0)
plt.show()

Large-N limit in grand canonical ensemble

Note that at low $T$ the chemical potential is negative and close to zero. One can verify that the occupation of the first excited state changes very little near $\mu = 0$, where the total number of particles diverges, while the ground-state occupation changes drastically as $\mu \to 0$. This is why, for all excited states, we may replace the actual occupation numbers by the values they would have at $\mu=0$.

Saturation

The saturation number is the maximum mean number of particles that can be accommodated in the excited states:

$$\langle N_{sat} \rangle = \sum_{E \geq 1} \mathcal{N}(E)\frac{e^{-\beta E}}{1 - e^{-\beta E}} = \sum_{E \geq 1} \langle N_{sat}(E) \rangle ,$$

As N increases (and $\mu \to 0$) the true occupation numbers of the excited states approach the saturation numbers $\langle N_{sat}(E) \rangle$. Assume a trap is getting filled with particles. Initially, particles go into excited states until they are all saturated (at $T_c$). After saturation, any additional particles have no choice but to populate the ground state and to contribute to the Bose–Einstein condensate.

High-T

At low $\beta$ (high $T$), we can replace the sum over saturation numbers by an integral (substituting $x = \beta E$):

$$\langle N_{sat} \rangle = \sum_{E \geq 1} \frac{(E+1)(E+2)}{2} \frac{e^{-\beta E}}{1 - e^{-\beta E}} = \sum_{E \geq 1} \langle N_{sat}(E) \rangle_{\beta \to 0} \to \frac{1}{\beta^3}\int^{\infty}_0 \text{d} x ~\frac{x^2}{2} \frac{1}{e^x - 1} = \frac{\zeta(3)}{\beta^3} \approx \frac{1.202}{\beta^3},$$

The saturation number in the $N \to \infty$ limit allows us to determine the critical temperature of the ideal Bose gas. At the critical temperature, the saturation number equals the particle number:

$$\langle N \rangle = \langle N_{sat} \rangle, \ \ \ T=T_c $$$$\langle N \rangle = \zeta(3)T^3_c \ \ \implies \ \ T_c = \frac{\langle N \rangle^{1/3}}{\zeta(3)^{1/3}} \approx 0.94\langle N \rangle^{1/3}$$
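The high-temperature formula can be checked against the direct sum ($\beta = 0.01$ and the truncation at $E = 20000$ are illustrative choices; the residual ~2% excess is the subleading $O(\beta)$ correction to the integral approximation):

```python
import numpy as np

beta = 0.01
E = np.arange(1, 20001)
NE = (E + 1) * (E + 2) / 2.0      # degeneracy of level E
x = np.exp(-beta * E)
N_sat = np.sum(NE * x / (1.0 - x))        # direct sum over excited levels
zeta3 = 1.2020569031595943
print(N_sat / (zeta3 / beta**3))          # ≈ 1.02
```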

Condensate fraction

Below $T_c$, the difference between the particle number and the saturation number must come from particles in the ground state, so that

$$\langle N_0 \rangle = \langle N \rangle - \langle N_{sat} \rangle \ \ \implies \ \ \frac{\langle N_0 \rangle}{\langle N \rangle} = 1 - \frac{T^3}{T_c^3} $$
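A direct numerical check of this law, solving $\langle N(\mu) \rangle = N$ for $\mu$ below $T_c$ (the values $N = 10^5$, $T = 30$, and the cutoff $E_{max} = 3000$ are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

N, T = 1.0e5, 30.0
beta = 1.0 / T
Tc = (N / 1.2020569) ** (1.0 / 3.0)       # ≈ 43.7, so T < Tc

def mean_N(mu, Emax=3000):
    """Grand canonical <N> for the 3D harmonic trap at chemical potential mu."""
    E = np.arange(Emax + 1)
    NE = (E + 1) * (E + 2) / 2.0
    x = np.exp(-beta * (E - mu))
    return np.sum(NE * x / (1.0 - x))

mu = brentq(lambda m: mean_N(m) - N, -50.0, -1e-10)   # solve <N(mu)> = N
n0_frac = np.exp(beta * mu) / (1.0 - np.exp(beta * mu)) / N
print(n0_frac, 1.0 - (T / Tc) ** 3)       # both ≈ 0.65-0.68
```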

In [65]:
@njit
def mean_nk(mu, beta, k):
    bmu = np.exp(-beta * (mu - k))
    return 1.0/(bmu - 1.0)
In [66]:
T = 10
Ns= [2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000] 

mean_N1s = []
mean_N0s = []
mus = []

for N in Ns:
    mu = get_mu(N, 1/T)
    mN1 = mean_nk(mu, 1/T, 1)
    mN0 = mean_nk(mu, 1/T, 0)
    mean_N1s.append(mN1)
    mean_N0s.append(mN0)
    mus.append(mu)
In [67]:
plt.plot(mus, mean_N1s, 'r-x', lw=0.6)
plt.xlabel("$\mu$")
plt.ylabel("$<N_{\sigma=1}>$")
plt.ylim(9.3, 9.51)
plt.show()
In [68]:
plt.plot(mus, mean_N0s, 'b-o', lw=0.6)
plt.xlabel("$\mu$")
plt.ylabel("$<N_0>$")
plt.show()
In [72]:
N = 100

Tmax = 10
NumT= 20
Ts = np.linspace(0.1, Tmax, NumT)
ts = Ts /float(N)**(1/3.0)

cf_theory = 1 - (ts / 0.94) ** 3

#############################################

fig = plt.figure(figsize=(10,8))

plt.plot(ts, grand_cond_frac, '-x', markersize=5, lw=0.7, label=f"grand canonical")
plt.plot(ts, cf, '-o', lw=0.7, markersize=5, label=f'canonical')
plt.plot(ts, cf_theory, '--', label="$1 - T^3/T^3_c$")
plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.title(f"Condensate fraction vs scaled temperature ($N=${N})", fontsize=22)
plt.legend(loc="upper right")
plt.xlim(0.1, 1.0)
plt.ylim(0.0, 1.1)

plt.show()

Appendix: Oscillators and Strings

System of almost independent oscillators [3]

Consider a system of $N$ almost independent oscillators with total energy:

$$E = \frac{1}{2}N\hbar \omega + M \hbar \omega, \ \ \ M, N = 0, 1, 2, \dots $$

Let the quantum number of the $i$-th oscillator be $n_i$, then:

$$n_1 + n_2 + \dots + n_N = M $$

Therefore, the thermodynamic weight $W_M$ equals the number of ways (order matters) of writing $M$ as a sum of $N$ non-negative integers:

$$W_M = {M+N-1 \choose N-1},$$
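A brute-force check of this counting for small numbers ($M = 6$, $N = 3$ are illustrative values):

```python
from math import comb
from itertools import product

M, N = 6, 3
# ordered N-tuples of non-negative integers summing to M
count = sum(1 for t in product(range(M + 1), repeat=N) if sum(t) == M)
assert count == comb(M + N - 1, N - 1) == 28
```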

and the entropy (in units $k_B = 1$) is:

$$S = \log W_M \approx (M+N) \log(M+N) - M\log M - N \log N.$$

The temperature of such system is determined from

$$\frac{1}{T} = \frac{\partial S}{\partial E} = \frac{1}{\hbar \omega}\frac{\partial S}{\partial M} = \frac{1}{\hbar \omega}\log \left(\frac{M+N}{M}\right) = \frac{1}{\hbar \omega}\log \left(\frac{E/N + \frac{1}{2}\hbar \omega}{E/N - \frac{1}{2}\hbar \omega}\right).$$

As a result:

$$E = N\left[\frac{1}{2}\hbar \omega + \frac{\hbar \omega}{e^{\hbar \omega/T} - 1}\right]$$
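Consistency check: the temperature obtained from the entropy derivative reproduces the Planck form of the energy exactly (here $\hbar\omega = 1$; $M$, $N$ are illustrative values):

```python
import numpy as np

M, N = 700, 300                                # units hbar*omega = 1
T = 1.0 / np.log((M + N) / M)                  # from 1/T = dS/dE
E_planck = N * (0.5 + 1.0 / (np.exp(1.0 / T) - 1.0))
assert abs(E_planck - (0.5 * N + M)) < 1e-8    # equals E = N/2 + M
```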

Suppose a given oscillator has energy $\epsilon_n = \left(n+\frac{1}{2}\right)\hbar \omega$; then the rest of the system has energy $E-\epsilon_n = \frac{N-1}{2}\hbar \omega + \left(M - n\right)\hbar \omega$. The thermodynamic weight $W'_{M-n}$ of this remaining system is:

$$W'_{M-n} = {M+N-n-2 \choose N-2},$$

Therefore, the probability for one of the oscillators to have quantum number $n$ is:

$$P(n) = \frac{W'_{M-n}}{W_M} \approx \frac{M^n N}{(M+N)^{n+1}} = \frac{1}{1 + M/N}\left(\frac{M/N}{1 + M/N}\right)^n = \left(1 - e^{-\beta \hbar \omega}\right)e^{-\beta n \hbar \omega},$$

where we used $e^{\beta \hbar \omega} = (M+N)/M$ from the temperature relation above.
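The combinatorial $P(n)$ indeed converges to this geometric law for a large system at fixed $M/N$ ($M = 4000$, $N = 2000$ are illustrative values):

```python
from math import comb

M, N = 4000, 2000
W_M = comb(M + N - 1, N - 1)

def P_exact(n):
    """Exact probability W'_{M-n} / W_M for finite M, N."""
    return comb(M - n + N - 2, N - 2) / W_M

q = (M / N) / (1.0 + M / N)            # = e^{-beta*hbar*omega} in the limit
for n in range(5):
    assert abs(P_exact(n) - (1.0 - q) * q**n) < 1e-3
```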

N.B. Assume $M = 2$ and $N=2$; then the microstates $(0, 2)$, $(2, 0)$ and $(1, 1)$ are counted as distinct. This might seem to contradict the fact that bosonic states must be invariant under permutations. However, the oscillators under consideration may carry quantum numbers other than $n_i$, and when these quantum numbers differ, the oscillators are distinct. For instance, a photon can be described by $(n, n_x, n_y, n_z)$, where $n \sim \sqrt{n^2_x + n_y^2 + n^2_z}$ is related to the energy, while the last three set the wavevector. Since there is a continuum of wavevector states with the same energy, the probability that any two photons have the same momentum vector is zero. Therefore, all $N$ photons can be treated as distinguishable.


The weight $W_M$ can also be obtained in the language of partition functions. The partition function of a single harmonic oscillator is:

$$Z_1(\beta) = \sum_{n=0}^{\infty} e^{-\beta(n+ 1/2)\hbar \omega} = \frac{e^{-\beta \hbar \omega/2}}{1 - e^{-\beta \hbar \omega}}$$

For a system of $N$ (almost independent) oscillators,

$$Z_N(\beta) = Z^N_1(\beta) = \frac{e^{-N\beta \hbar \omega/2}}{\left(1 - e^{-\beta \hbar \omega} \right)^N} = e^{-N\beta \hbar \omega/2} \sum_{M=0}^{\infty} {-N \choose M} (-1)^M e^{-M\beta \hbar \omega} \\= \sum_{M=0}^{\infty} {M+N-1 \choose N-1} e^{-\left(M + N/2 \right)\beta \hbar \omega} = \int e^{-\beta E} \Omega(E) \, \text{d}E$$$$\Omega(E) = \sum_{M=0}^{\infty} {M+N-1 \choose N-1} \, \delta\left(E - \left[M + N/2 \right]\hbar \omega\right) $$

The latter implies that there are $W(M) = {M+N-1 \choose N-1}$ states for each energy level $E = \left(M + N/2 \right)\hbar \omega$.

From a mathematical point of view, as a problem of partitioning an integer, the generating function is:

$$f(x) = \left[\frac{1 - x^{M+1}}{1 - x}\right]^N $$

whose coefficient of $x^M$ reproduces the above result (the cutoff $x^{M+1}$, restricting each part to be at most $M$, never affects the coefficient of $x^M$).
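This can be checked symbolically with sympy, already imported above (a minimal sketch for small $M$, $N$):

```python
from sympy import symbols, expand, binomial

x = symbols('x')
M, N = 6, 4

# (1 - x^(M+1))/(1 - x) = 1 + x + ... + x^M, so expand the finite sum directly
f = expand(sum(x**i for i in range(M + 1))**N)
coeff = f.coeff(x, M)
print(coeff, binomial(M + N - 1, N - 1))  # 84 84
```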

System of identical oscillators vs string

Assume now that the oscillators are such that all microstates related by permutations are identical, e.g. the microstates $(0, 2)$ and $(2, 0)$ are counted once. This is possible either in 1+1 space-time dimensions, or when the oscillators do not carry any other distinguishing quantum numbers. In this case $W^{id}_M$ is the number of partitions of $M$ into at most $N$ parts (order doesn't matter): $W^{id}_M = p_N(M)$. Now, since

$$Z^{id}_N(\beta) = e^{-N\beta \hbar \omega/2} \sum_{M=0}^{\infty} p_N(M) e^{-M\beta \hbar \omega} = e^{-N\beta \hbar \omega/2} \prod_{k=1}^{N} \frac{1}{1 - e^{-k\beta \hbar \omega}},$$
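The generating-function identity can be verified numerically (a minimal sketch; both helper names are illustrative). Counting $p_N(M)$ directly agrees with the coefficients of $\prod_{k=1}^{N} (1 - x^k)^{-1}$, because conjugation of Young diagrams exchanges "at most $N$ parts" with "parts of size at most $N$":

```python
def partitions_at_most(M, N, max_part=None):
    """Count partitions of M into at most N parts (order irrelevant)."""
    if max_part is None:
        max_part = M
    if M == 0:
        return 1
    if N == 0:
        return 0
    # pick the largest part k, then partition the rest into parts <= k
    return sum(partitions_at_most(M - k, N - 1, k)
               for k in range(1, min(M, max_part) + 1))

def product_coeffs(N, Mmax):
    """Coefficients of prod_{k=1}^{N} 1/(1 - x^k), truncated at x^Mmax."""
    c = [1] + [0] * Mmax
    for k in range(1, N + 1):
        for m in range(k, Mmax + 1):   # multiply by 1/(1 - x^k)
            c[m] += c[m - k]
    return c

N, Mmax = 5, 12
print(product_coeffs(N, Mmax) ==
      [partitions_at_most(M, N) for M in range(Mmax + 1)])  # True
```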

where we used the generating function for restricted partitions. From the above partition function it follows that the energy is:

$$E^{id} = \frac{N}{2}\hbar \omega + \sum_{k=1}^{N}\frac{k\hbar\omega}{e^{k\hbar \omega/T} - 1}$$

Notice that the string partition function, ignoring the zero-point energy, is:

$$Z^{\text{string}}(\beta) = \prod_{k=1}^{\infty} \frac{1}{\left(1 - e^{-k\beta \hbar \omega}\right)^b},$$

where $b$ is the number of transverse polarizations, e.g. for $D=26$, $b=24$.
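The truncated string product can be expanded numerically (a sketch; the helper name is illustrative): for $b=1$ the coefficients reproduce the ordinary partition numbers $p(n)$, while for $b=24$ they give the open bosonic string level degeneracies $1, 24, 324, 3200, \dots$

```python
def level_degeneracies(b, max_level):
    """Coefficients of prod_{k>=1} (1 - q^k)^(-b), truncated at q^max_level.
    Factors with k > max_level do not contribute at this order."""
    c = [1] + [0] * max_level
    for k in range(1, max_level + 1):
        for _ in range(b):                    # apply 1/(1 - q^k) b times
            for m in range(k, max_level + 1):
                c[m] += c[m - k]
    return c

print(level_degeneracies(1, 8))   # [1, 1, 2, 3, 5, 7, 11, 15, 22]
print(level_degeneracies(24, 3))  # [1, 24, 324, 3200]
```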

To summarize, we observe that the system of identical oscillators (e.g. in 1+1 D), where the order within a partition is irrelevant, is equivalent to a system of distinct oscillators with frequencies $k\omega$, $k = 1, \dots, N$. The latter is equivalent to a system consisting of a single string. This is not surprising, since such a system of bosons on a 1D segment behaves very much like a string with one polarization.

Convert the Jupyter notebook into an HTML file in the given format

In [ ]:
notebook_file = r"P9_Bose_Einstein_Statistics.ipynb"
html_converter(notebook_file)